For most of us, the word “algorithm” is fairly new to our vocabulary. But badly designed decision-making algorithms have a growing impact on our lives and can do a great deal of damage.
Simply put, an algorithm is a set of instructions used by computer systems to perform a task or make a decision. On social media platforms, for example, algorithms decide what ads appear based on what content a user looks at, likes or shares.
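To make that concrete, here is a minimal, hypothetical sketch in Python of such a decision rule. The topics and the pick-the-most-frequent rule are invented for illustration; real platforms use far more elaborate systems.

```python
# A minimal, hypothetical sketch of "a set of instructions that makes
# a decision." The topics and the decision rule are invented for
# illustration, not drawn from any real platform.

def pick_ad(recent_interactions: list[str]) -> str:
    """Choose an ad category based on what a user recently engaged with."""
    # Count how often each topic appears in the user's recent activity.
    counts: dict[str, int] = {}
    for topic in recent_interactions:
        counts[topic] = counts.get(topic, 0) + 1
    # Decision rule: show an ad for the topic the user engaged with most.
    return max(counts, key=counts.get) if counts else "generic"

print(pick_ad(["sports", "cooking", "sports"]))  # -> "sports"
```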
As we discovered in a new Greenlining Institute report on algorithmic bias, these algorithms may be used to decide everything from whether someone gets a job interview or a mortgage, to how heavily one’s neighborhood is policed.
“Poorly designed algorithms,” we wrote, “threaten to amplify systemic racism by reproducing patterns of discrimination and bias that are found in the data algorithms use to learn and make decisions.”
Algorithms can be put to good use, such as helping manage responses to the COVID-19 pandemic, but things can also go seriously wrong. Sometimes algorithms replicate the conscious or unconscious biases of the humans who designed them, disadvantaging whole groups of people, often without those affected even knowing it’s happening.
Like humans, algorithms “learn,” in their case from what’s called training data: historical examples that teach the algorithm to look for patterns. That’s where things can start to go wrong.
Consider a bank whose historical lending data shows that it routinely gave higher interest rates to people in a ZIP code with a majority of Black residents. An algorithm trained on that biased data could learn to overcharge residents in that area.
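A toy sketch makes the mechanism plain. The ZIP codes and interest rates below are invented, and here “training” is reduced to averaging past rates per ZIP code. Real lending models are far more complex, but the failure mode is the same: the output mirrors whatever pattern sits in the historical data.

```python
# A toy sketch (invented data) of how a model trained on biased history
# reproduces that bias. "Training" here is just averaging past interest
# rates per ZIP code; real lending models are far more complex, but the
# failure mode is the same: the output mirrors the pattern in the data.

# Hypothetical historical loans: (zip_code, interest_rate_percent).
# ZIP "94601" stands in for a neighborhood that was routinely overcharged.
history = [
    ("94601", 7.9), ("94601", 8.1), ("94601", 8.0),
    ("94110", 5.1), ("94110", 4.9), ("94110", 5.0),
]

def train(records):
    """Learn the average rate historically offered in each ZIP code."""
    totals: dict[str, list[float]] = {}
    for zip_code, rate in records:
        totals.setdefault(zip_code, []).append(rate)
    return {z: sum(rates) / len(rates) for z, rates in totals.items()}

model = train(history)

# The "trained" model quotes new applicants the biased historical rate,
# even though nothing about the applicants themselves justifies it.
print(model["94601"])  # 8.0 -- the overcharging is baked in
print(model["94110"])  # 5.0
```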
In 2014, Amazon tried to develop a recruiting algorithm to rate job candidates’ resumes and predict who would do well. Even though gender was never meant to be a factor, the algorithm favored men and penalized resumes that included the names of all-women’s colleges. This likely happened because Amazon had a poor record of hiring and promoting women, so the training data taught the algorithm to repeat that pattern.
Happily, Amazon’s researchers caught the problem and, when they found they couldn’t fix it, scrapped the algorithm. But how many such situations have gone unnoticed and uncorrected? No one knows.
Worse, our laws have not caught up with this new, insidious form of discrimination. While both federal and state governments have anti-discrimination laws, they’re ineffective in this situation, since most were written before the internet was even invented. And proving algorithmic bias is difficult since the people being discriminated against may not know why or how the decision that harmed them was made.
Our anti-discrimination laws must be updated to properly regulate algorithmic bias and discrimination, with provisions to promote transparency. California’s legislature is leading the way by considering legislation that would bring more transparency and accountability to algorithms used in government programs.
Government at all levels should pay far closer attention to this growing form of discrimination.
Vinhcent Le is technology equity legal counsel at The Greenlining Institute.